
    Inferring Student Engagement in Collaborative Problem Solving from Visual Cues

    Automatic analysis of students' collaborative interactions in physical settings is an emerging problem with a wide range of applications in education. However, this problem has proven challenging due to the complex, interdependent and dynamic nature of student interactions in real-world contexts. In this paper, we propose a novel framework for the classification of student engagement in open-ended, face-to-face collaborative problem-solving (CPS) tasks purely from video data. Our framework i) estimates body pose from the recordings of student interactions; ii) combines face recognition with a Bayesian model to identify and track students with high accuracy; and iii) classifies student engagement leveraging a Team Long Short-Term Memory (Team LSTM) neural network model. This novel approach allows the LSTMs to capture dependencies among individual students in their collaborative interactions. Our results show that the Team LSTM significantly improves performance compared to a baseline method that treats each student's trajectory independently.
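The abstract does not specify the Team LSTM architecture in detail; below is a minimal sketch of one plausible reading, in which a shared LSTM encodes each student's pose-feature sequence and the per-student hidden states are mean-pooled across the team, so each engagement prediction can depend on teammates' behaviour. All names and dimensions are illustrative assumptions, not the authors' exact design.

```python
# Hypothetical "Team LSTM" sketch: a shared LSTM per student, with
# team-level context formed by pooling hidden states across students.
import torch
import torch.nn as nn

class TeamLSTM(nn.Module):
    def __init__(self, pose_dim=36, hidden_dim=64, n_classes=3):
        super().__init__()
        self.encoder = nn.LSTM(pose_dim, hidden_dim, batch_first=True)
        # classifier sees the student's own state plus the team context
        self.classifier = nn.Linear(2 * hidden_dim, n_classes)

    def forward(self, poses):
        # poses: (team_size, seq_len, pose_dim) -- one sequence per student
        out, _ = self.encoder(poses)             # (team, seq, hidden)
        last = out[:, -1, :]                     # final state per student
        team_ctx = last.mean(dim=0, keepdim=True).expand_as(last)
        return self.classifier(torch.cat([last, team_ctx], dim=-1))

model = TeamLSTM()
logits = model(torch.randn(4, 50, 36))  # 4 students, 50 pose frames each
print(logits.shape)                     # torch.Size([4, 3])
```

The pooling step is what distinguishes this from the paper's baseline, which would classify each student's trajectory in isolation.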

    Fully Automatic Analysis of Engagement and Its Relationship to Personality in Human-Robot Interactions

    Engagement is crucial to designing intelligent systems that can adapt to the characteristics of their users. This paper focuses on automatic analysis and classification of engagement based on humans’ and robot’s personality profiles in a triadic human-human-robot interaction setting. More explicitly, we present a study that involves two participants interacting with a humanoid robot, and investigate how participants’ personalities can be used together with the robot’s personality to predict the engagement state of each participant. The fully automatic system is firstly trained to predict the Big Five personality traits of each participant by extracting individual and interpersonal features from their nonverbal behavioural cues. Secondly, the output of the personality prediction system is used as an input to the engagement classification system. Thirdly, we focus on the concept of “group engagement”, which we define as the collective engagement of the participants with the robot, and analyse the impact of similar and dissimilar personalities on the engagement classification. 
Our experimental results show that (i) using the automatically predicted personality labels for engagement classification yields an F-measure on par with using the manually annotated personality labels, demonstrating the effectiveness of the proposed automatic personality prediction module; (ii) using the individual and interpersonal features without personality information is not sufficient for engagement classification, whereas incorporating the participants' and robot's personalities with individual/interpersonal features increases engagement classification performance; and (iii) the best classification performance is achieved when the participants and the robot are extroverted, while the worst results are obtained when all are introverted. This work was performed within the Labex SMART project (ANR-11-LABX-65) supported by French state funds managed by the ANR within the Investissements d’Avenir programme under reference ANR-11-IDEX-0004-02. The work of Oya Celiktutan and Hatice Gunes is also funded by the EPSRC under its IDEAS Factory Sandpits call on Digital Personhood (Grant Ref.: EP/L00416X/1). This is the author accepted manuscript. The final version is available from Institute of Electrical and Electronics Engineers via http://dx.doi.org/10.1109/ACCESS.2016.261452
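The two-stage pipeline described above can be sketched as follows. This is a hedged illustration, not the authors' implementation: the models (ridge regression for trait prediction, an SVM for engagement), the robot's fixed trait profile, and all data are placeholder assumptions standing in for the paper's feature extraction and labels.

```python
# Sketch of the pipeline: stage 1 predicts Big Five traits from
# nonverbal cues; stage 2 classifies engagement from those predicted
# traits combined with the robot's personality and the raw features.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_nonverbal = rng.normal(size=(40, 10))   # individual + interpersonal cues
y_traits = rng.normal(size=(40, 5))       # annotated Big Five labels
y_engaged = rng.integers(0, 2, size=40)   # engagement labels

personality = Ridge().fit(X_nonverbal, y_traits)            # stage 1
traits_pred = personality.predict(X_nonverbal)
robot_traits = np.tile([1.0, 0.2, 0.5, 0.8, 0.3], (40, 1))  # fixed profile

X_stage2 = np.hstack([X_nonverbal, traits_pred, robot_traits])
engagement = SVC().fit(X_stage2, y_engaged)                 # stage 2
print(engagement.predict(X_stage2[:3]).shape)  # (3,)
```

Finding (i) in the abstract corresponds to swapping `traits_pred` for the annotated `y_traits` in `X_stage2` and observing comparable classification performance.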

    Automatic detection of cognitive impairment with virtual reality

    Cognitive impairment features in neuropsychiatric conditions and, when undiagnosed, can have a severe impact on the affected individual's safety and ability to perform daily tasks. Virtual Reality (VR) systems are increasingly being explored for the recognition, diagnosis and treatment of cognitive impairment. In this paper, we describe novel VR-derived measures of cognitive performance and show their correspondence with clinically-validated cognitive performance measures. We use an immersive VR environment called VStore where participants complete a simulated supermarket shopping task. People with psychosis (k=26) and non-patient controls (k=128) participated in the study, spanning ages 20-79 years. The individuals were split into two cohorts, a homogeneous non-patient cohort (k=99 non-patient participants) and a heterogeneous cohort (k=26 patients, k=29 non-patient participants). Participants' spatio-temporal behaviour in VStore is used to extract four features, namely, route optimality score, proportional distance score, execution error score, and hesitation score, using the Traveling Salesman Problem and explore-exploit decision mathematics. These extracted features are mapped to seven validated cognitive performance scores via linear regression models. The most statistically important feature is found to be the hesitation score. When combined with the remaining extracted features, the multiple linear regression model yields statistically significant results with R2 = 0.369, F-Stat = 7.158, p(F-Stat) = 0.000128.
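The final mapping step is an ordinary multiple linear regression of a clinical cognitive score on the VR-derived features, with model fit summarised by R². The numpy-only sketch below illustrates that computation; the synthetic data and coefficients are placeholders, not the study's measurements.

```python
# Illustrative OLS fit: regress a cognitive score on the four VStore
# features (route optimality, proportional distance, execution error,
# hesitation) and compute the coefficient of determination R^2.
import numpy as np

rng = rng = np.random.default_rng(1)
features = rng.normal(size=(128, 4))      # stand-in for the four features
score = features @ np.array([0.2, -0.1, -0.3, 0.6]) \
        + rng.normal(scale=0.8, size=128) # stand-in cognitive score

X = np.column_stack([np.ones(128), features])    # add intercept column
beta, *_ = np.linalg.lstsq(X, score, rcond=None) # OLS coefficients
resid = score - X @ beta
r2 = 1 - resid.var() / score.var()
print(round(r2, 3))
```

With an intercept included and evaluated on the training data, R² is bounded between 0 and 1; the paper's reported R² = 0.369 would come out of exactly this kind of fit against the validated cognitive scores.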

    Personality perception of robot avatar tele-operators

    © 2016 IEEE. Nowadays a significant part of human-human interaction takes place over distance. Tele-operated robot avatars, in which an operator's behaviours are portrayed by a robot proxy, have the potential to improve distance interaction, e.g., improving social presence and trust. However, having communication mediated by a robot changes the perception of the operator's appearance and behaviour, which have been shown to be used alongside vocal cues in judging personality. In this paper we present a study that investigates how robot mediation affects the way the personality of the operator is perceived. More specifically, we aim to investigate whether judges of personality can be consistent in assessing personality traits, can agree with one another, can agree with operators' self-assessed personality, and shift their perceptions to incorporate characteristics associated with the robot's appearance. Our experiments show that (i) judges utilise robot appearance cues along with operator vocal cues to make their judgements, (ii) operators' arm gestures reproduced on the robot aid personality judgements, and (iii) how personality cues are perceived and evaluated through speech, gesture and robot appearance is highly operator-dependent. We discuss the implications of these results for both tele-operated and autonomous robots that aim to portray personality. This work was funded by the EPSRC under its IDEAS Factory Sandpits call on Digital Personhood (Grant Ref: EP/L00416X/1). This is the author accepted manuscript. The final version is available from IEEE via http://dx.doi.org/10.1109/HRI.2016.745174

    Personality Perception of Robot Avatar Teleoperators in Solo and Dyadic Tasks

    Humanoid robot avatars are a potential new telecommunication tool, whereby a user is remotely represented by a robot that replicates their arm, head, and possibly face movements. They have been shown to have a number of benefits over more traditional media such as phones or video calls. However, using a teleoperated humanoid as a communication medium inherently changes the appearance of the operator, and appearance-based stereotypes are used in interpersonal judgments (whether consciously or unconsciously). One such judgment that plays a key role in how people interact is personality. Hence, we have been motivated to investigate if and how using a robot avatar alters the perceived personality of teleoperators. To do so, we carried out two studies where participants performed 3 communication tasks, solo in study one and dyadic in study two, and were recorded on video both with and without robot mediation. Judges recruited using online crowdsourcing services then made personality judgments of the participants in the video clips. We observed that judges were able to make internally consistent trait judgments in both communication conditions. However, judge agreement was affected by robot mediation, although which traits were affected was highly task dependent. Our most important finding was that in dyadic tasks personality trait perception was shifted to incorporate cues relating to the robot’s appearance when it was used to communicate. Our findings have important implications for telepresence robot design and personality expression in autonomous robots. This work was funded by the EPSRC under its IDEAS Factory Sandpits call on Digital Personhood (Grant Ref: EP/L00416X/1).

    Learning to self-manage by intelligent monitoring, prediction and intervention

    Despite the growing prevalence of multimorbidities, current digital self-management approaches still prioritise single conditions. The future of out-of-hospital care requires researchers to expand their horizons; integrated assistive technologies should enable people to live their life well regardless of their chronic conditions. Yet, many of the current digital self-management technologies are not equipped to handle this problem. In this position paper, we suggest that the solution to these issues is a model-aware and data-agnostic platform formed on the basis of a tailored self-management plan and three integral concepts: Monitoring (M) multiple information sources to empower Predictions (P) and trigger intelligent Interventions (I). Here we present our ideas for the formation of such a platform, and its potential impact on quality of life for sufferers of chronic conditions.

    A cloud-based robot system for long-term interaction: principles, implementation, lessons learned

    Making the transition to long-term interaction with social-robot systems has been identified as one of the main challenges in human-robot interaction. This article identifies four design principles to address this challenge and applies them in a real-world implementation: cloud-based robot control, a modular design, one common knowledge base for all applications, and hybrid artificial intelligence for decision making and reasoning. The control architecture for this robot includes a common Knowledge base (ontologies), Database, “Hybrid Artificial Brain” (dialogue manager, action selection and explainable AI), Activities Centre (Timeline, Quiz, Break and Sort, Memory, Tip of the Day), Embodied Conversational Agent (ECA, i.e., robot and avatar), and Dashboards (for authoring and monitoring the interaction). Further, the ECA is integrated with an expandable set of (mobile) health applications. The resulting system is a Personal Assistant for a healthy Lifestyle (PAL), which supports diabetic children with self-management and educates them on health-related issues (48 children, aged 6–14, recruited via hospitals in the Netherlands and in Italy). It is capable of autonomous interaction “in the wild” for prolonged periods of time without the need for a “Wizard-of-Oz” (up to 6 months online). PAL is an exemplary system that provides personalised, stable and diverse, long-term human-robot interaction.